fix(langchain): ensure llm spans are created for sync cases #3094
droidnxs wants to merge 1 commit into traceloop:main from
Conversation
Caution
Changes requested ❌
Reviewed everything up to 363a42a in 1 minute and 59 seconds.
- Reviewed 990 lines of code in 11 files.
- Skipped 1 file when reviewing.
- Skipped posting 3 draft comments. View those below.
1. packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/callback_handler.py:254
- Draft comment:
Replace 'if generation.message.content is str:' with an isinstance check (e.g. isinstance(generation.message.content, str)) to correctly verify the type. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
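For context on this draft comment: `value is str` tests identity against the `str` type object itself, so it is false for every actual string, while `isinstance` checks the value's type. A minimal illustration (the variable name `content` is ours, not the PR's):

```python
content = "some model output"

# Buggy guard: identity comparison against the type object is always
# False for a real string value, so the branch never runs.
assert (content is str) is False

# Correct guard: isinstance checks the value's type.
assert isinstance(content, str)
```

The guard in `callback_handler.py` would therefore read `if isinstance(generation.message.content, str):`.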
2. packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py:240
- Draft comment:
Typo: Consider changing "this should helps as a fallback" to "this should help as a fallback".
- Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 10% vs. threshold = 50%). The comment is technically correct about the grammar, but it concerns a code comment rather than code, the meaning is clear despite the error, and the issue was judged too minor to post.
3. packages/sample-app/sample_app/langgraph_openai.py:20
- Draft comment:
Possible typo: The model is set to "gpt-4o". Did you mean "gpt-4"? - Reason this comment was not posted:
Marked as duplicate.
```python
# In legacy chains like LLMChain, suppressing model instrumentations
# within create_llm_span doesn't work, so this should helps as a fallback
context_api.attach(
```
Consider capturing the token returned by context_api.attach and detaching it after the wrapped call to limit the suppression flag's scope. This helps avoid unintended propagation in subsequent calls.
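The attach/detach pattern the reviewer describes can be sketched with the standard-library `contextvars` module, whose set/reset tokens work the same way as the token returned by `context_api.attach` and consumed by `context_api.detach`. This is a minimal stand-in, not the PR's code; the names `wrapped_call` and `instrumented_call` are hypothetical:

```python
import contextvars

# Stand-in for the suppression flag that instrumentations check.
_suppress = contextvars.ContextVar("suppress_instrumentation", default=False)

def instrumented_call():
    # A downstream instrumentation would skip span creation when suppressed.
    return "suppressed" if _suppress.get() else "recorded"

def wrapped_call():
    # Analogous to: token = context_api.attach(...)
    token = _suppress.set(True)
    try:
        return instrumented_call()
    finally:
        # Analogous to: context_api.detach(token) -- restores the prior
        # context so later, unrelated calls are not also suppressed.
        _suppress.reset(token)
```

With the detach in place, `wrapped_call()` sees the flag but a subsequent `instrumented_call()` does not, which is exactly the scoping the comment asks for.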
```python
def calculate(state: State):
    request = state["request"]
    completion = openai_client.chat.completions.create(
        model="gpt-4o",
```
Typo suggestion: The model string "gpt-4o" on this line might be a typographical error. If it was intended to be "gpt-4", please update accordingly.
Suggested change:
```diff
-        model="gpt-4o",
+        model="gpt-4",
```
Hi @nirga, I'm quite busy with some critical work this week and next. If you could pick it up, that would be really helpful.
@obs-gh-abhishekrao Thanks for your work! I can pick this up. I'll rebase the commits, resolve the conflicts, and submit a new PR.
Continuation of #2805
feat(instrumentation): ... or fix(instrumentation): ...
Fixes #2271
Reproducible code from bug report
Courtesy #2271. Thanks @jemo21k!
Expected behaviour: LLM span should be created.
Before

No `openai.chat` span.

After

`openai.chat` span is present.

sample_app/langgraph_example
Expected behaviour: No change. An additional LLM span should not be created since the `ChatOpenAI` callback already creates one.

Before
After
Important
Fixes LLM span creation for synchronous cases in Langchain and adds tests to verify behavior.
- Adds `_create_llm_span()` in `callback_handler.py`.
- Updates `__call__()` in `__init__.py` and `_create_llm_span()` in `callback_handler.py`.
- Adds `test_langgraph.py` to verify span creation for sync and async cases.
- Adds cassettes in `tests/cassettes/test_langgraph/` to mock OpenAI API responses.
- Updates `test_langchain_metrics.py` to include metrics tests for Langgraph.
- Updates `pyproject.toml` to include the `langgraph` dependency.

This description was created by Ellipsis for 363a42a.